188 research outputs found

    Low defect densities in molecular beam epitaxial GaAs achieved by isoelectronic In doping

    We have studied the effects of adding small amounts of In (0.2–1.2%) to GaAs grown by molecular beam epitaxy. The concentrations of four electron traps decrease by an order of magnitude, and the peak intensities of prominent emissions in the excitonic spectra are reduced with increasing In content. Based on the higher surface migration rate of In, compared to Ga, at the growth temperatures, it is apparent that the traps and the excitonic transitions are related to point defects. This agrees with earlier observations by F. Briones and D. M. Collins [J. Electron. Mater. 11, 847 (1982)] and B. J. Skromme, S. S. Bose, B. Lee, T. S. Low, T. R. Lepkowski, R.-Y. DeJule, G. E. Stillman, and J. C. M. Hwang [J. Appl. Phys. 58, 4702 (1985)]. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/69821/2/APPLAB-49-8-470-1.pd

    Orientation‐dependent phase modulation in InGaAs/GaAs multiquantum well waveguides

    The electro‐optic effect and phase modulation in In0.2Ga0.8As/GaAs multiple quantum wells have been experimentally studied for the first time. The experiments were done with 1.06 and 1.15 μm photoexcitation, which are, respectively, 25 and 115 meV below the electron–heavy-hole excitonic resonance. A strong quadratic electro‐optic effect was observed near the excitonic edge in addition to the linear effect. These are characterized by r63 = −1.85×10⁻¹⁹ m/V and (R33 − R13) = 2.9×10⁻¹⁹ m²/V². In addition, we observe a dispersion in the value of r63. The relative phase shifts are higher in the strained system at 1.06 μm than in lattice‐matched GaAs/AlGaAs. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/70160/2/APPLAB-53-22-2129-1.pd
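For orientation, the linear (Pockels) and quadratic (Kerr) contributions quoted above combine into the field-induced index change in the standard phenomenological form (a sketch; the sign convention and the reduction to these two coefficients are assumptions, not taken from the paper):

\[
\Delta n \;=\; -\tfrac{1}{2}\, n^{3}\left[\, r_{63}\,E \;+\; \left(R_{33}-R_{13}\right)E^{2} \,\right]
\]

Near the excitonic edge the quadratic term grows rapidly with detuning, which is consistent with the dispersion in r63 reported above.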

    Molecular beam epitaxial growth and luminescence of InxGa1−xAs/InxAl1−xAs multiquantum wells on GaAs

    This letter reports the successful molecular beam epitaxial growth of high‐quality InxGa1−xAs/InxAl1−xAs directly on GaAs. In situ observation of dynamic high‐energy electron diffraction oscillations during growth of InxGa1−xAs on GaAs indicates that the average cation migration rates are reduced due to the surface strain. By raising the growth temperature to enhance the migration rate and by using misoriented epitaxy to limit the propagation of threading and screw dislocations, we have grown device‐quality In0.15Ga0.85As/In0.15Al0.85As multiquantum wells on GaAs with a 0.5–1.0 μm In0.15Ga0.85As buffer layer. The luminescence efficiency of the bound exciton peak increases with misorientation and its linewidth varies from 11 to 15 meV. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/69823/2/APPLAB-51-4-261-1.pd

    Efficient video coding using visual sensitive information for HEVC coding standard

    The latest High Efficiency Video Coding (HEVC) standard introduces a large number of inter-mode block partitioning modes. The HEVC reference test model (HM) uses a partially exhaustive tree-structured mode selection, which still explores a large number of prediction unit (PU) modes for a coding unit (CU). This increases encoding time, which prevents electronic devices with limited processing resources from using various features of HEVC. By analyzing homogeneity, residuals, and statistical correlations among modes, many researchers have sped up the encoding process by reducing the number of PU modes. However, these approaches could not match the rate-distortion (RD) performance of the HM due to their dependency on the existing Lagrangian cost function (LCF) within the HEVC framework. In this paper, to avoid complete dependency on the LCF in the initial phase, we exploit a visually sensitive foreground motion and spatial salient metric (FMSSM) in a block. To capture its motion and saliency features, we use dynamic background and visual saliency modeling, respectively. According to the FMSSM values, a subset of PU modes is then explored for encoding the CU. This preprocessing phase is independent of the existing LCF. As the proposed coding technique further reduces the number of PU modes using two simple criteria (i.e., motion and saliency), it outperforms the HM in terms of encoding time reduction. As it also encodes uncovered and static background areas using the dynamic background frame as a substituted reference frame, it does not sacrifice quality. Tested results reveal that the proposed method achieves a 32% average encoding time reduction over the HM without any quality loss for a wide range of videos.
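The gating idea above can be sketched as follows. This is a minimal illustration, assuming per-block motion and saliency scores in [0, 1]; the thresholds and the exact mode subsets are hypothetical placeholders, not the paper's actual FMSSM criteria.

```python
# Hypothetical sketch of FMSSM-style PU-mode pruning: block-level motion and
# saliency scores decide which prediction-unit modes are explored for a CU.

ALL_PU_MODES = ["2Nx2N", "2NxN", "Nx2N", "NxN", "2NxnU", "2NxnD", "nLx2N", "nRx2N"]

def select_pu_modes(motion_score: float, saliency_score: float,
                    motion_thr: float = 0.5, saliency_thr: float = 0.5):
    """Return the subset of PU modes to evaluate for one coding unit."""
    if motion_score < motion_thr and saliency_score < saliency_thr:
        # Static, visually insensitive block: the large symmetric mode suffices.
        return ["2Nx2N"]
    if motion_score >= motion_thr and saliency_score >= saliency_thr:
        # Complex, salient motion: fall back to the full exhaustive search.
        return list(ALL_PU_MODES)
    # Mixed case: symmetric partitions only, skipping asymmetric (AMP) modes.
    return ["2Nx2N", "2NxN", "Nx2N", "NxN"]
```

Because the pruning happens before any Lagrangian cost is evaluated, it stays independent of the LCF, matching the preprocessing-phase design described above.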

    QMET : A new quality assessment metric for no-reference video coding by using human eye traversal

    Subjective quality assessment (SQA) is an ever-demanding approach due to its in-depth interaction with human cognition. The addition of a no-reference scheme could equip SQA techniques to tackle further challenges. The widely used objective metrics, peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM), and the subjective estimator, mean opinion score (MOS), require the original image for quality evaluation, which limits their use in no-reference situations. In this work, we present a no-reference SQA technique that could be an impressive substitute for reference-based approaches to quality evaluation. The High Efficiency Video Coding (HEVC) reference test model (HM15.0) is first exploited to generate five different qualities of the eight HEVC-recommended class sequences. To assess different aspects of coded video quality, ten participants were employed, and their eye-tracker (ET) recorded data demonstrate closer correlation among gaze plots for relatively better-quality video contents. Therefore, we innovatively calculate the amount of approximation of smooth eye traversal (ASET) using distance, angle, and pupil-size features from the recorded gaze trajectory data, and develop a new quality metric based on eye traversal (QMET). Experimental results show that the quality evaluation carried out by QMET is highly correlated with the HM-recommended coding quality. The performance of QMET is also compared with the PSNR and SSIM metrics to justify the effectiveness of each. International Conference Image and Vision Computing New Zealand
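The smooth-traversal idea can be illustrated with two of the named features. This is a sketch under stated assumptions: gaze samples are (x, y) points in order, and the summary statistics below (mean hop length, mean turning angle) stand in for the paper's actual ASET formulation, which is not given here.

```python
import math

# Illustrative gaze-trajectory features: smoother eye traversal (shorter hops,
# smaller turning angles between consecutive saccades) is taken as a proxy for
# better perceived quality, per the ASET idea described above.

def traversal_features(gaze):
    """gaze: ordered list of (x, y) fixation points.
    Returns (mean hop length, mean turning angle in radians)."""
    hops, angles = [], []
    for i in range(1, len(gaze)):
        dx, dy = gaze[i][0] - gaze[i-1][0], gaze[i][1] - gaze[i-1][1]
        hops.append(math.hypot(dx, dy))
        if i >= 2:
            a1 = math.atan2(gaze[i-1][1] - gaze[i-2][1],
                            gaze[i-1][0] - gaze[i-2][0])
            a2 = math.atan2(dy, dx)
            # Absolute turn between successive hops, wrapped into [0, pi].
            turn = abs(a2 - a1) % (2 * math.pi)
            angles.append(min(turn, 2 * math.pi - turn))
    mean_hop = sum(hops) / len(hops) if hops else 0.0
    mean_turn = sum(angles) / len(angles) if angles else 0.0
    return mean_hop, mean_turn
```

A perfectly straight scanpath yields a zero mean turning angle; erratic scanning over a low-quality clip drives both statistics up.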

    A novel quality metric using spatiotemporal correlational data of human eye maneuver

    The popularly used subjective estimator, mean opinion score (MOS), is often biased by the testing environment, viewers' mood, domain expertise, and many other factors that may actively influence the actual assessment. We therefore devise a no-reference subjective quality assessment metric by exploiting the nature of human eye browsing on videos. The participants' eye-tracker-recorded gaze data indicate a more concentrated eye-traversing approach for relatively better quality. We calculate the length, angle, pupil-size, and gaze-duration features from the recorded gaze trajectory. A content- and resolution-invariant operation is carried out prior to synthesizing them using an adaptive weighted function to develop a new quality metric using eye traversal (QMET). Tested results reveal that the quality evaluation carried out by QMET demonstrates a strong correlation with the most widely used peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), and the MOS. DICTA 2017 - International Conference on Digital Image Computing: Techniques and Applications
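The final synthesis step can be sketched as a weighted combination of the four normalized features. The normalization to [0, 1] and the weight values below are assumptions for illustration; the paper's adaptive weighted function may differ.

```python
# Minimal sketch of fusing the four gaze features (length, angle, pupil-size,
# gaze-duration) into one score; weights are re-normalized to sum to 1 so the
# result stays within the range of the normalized features.

def qmet_score(features, weights):
    """features, weights: dicts keyed by the same feature names."""
    total = sum(weights[k] for k in features)
    return sum(weights[k] / total * features[k] for k in features)

# Hypothetical normalized feature values for one clip, and example weights
# that emphasize traversal length over the other cues.
norm = {"length": 0.8, "angle": 0.9, "pupil": 0.7, "duration": 0.6}
w = {"length": 2.0, "angle": 1.0, "pupil": 1.0, "duration": 1.0}
score = qmet_score(norm, w)  # weighted mean of the normalized features
```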

    Fast coding strategy for HEVC by motion features and saliency applied on difference between successive image blocks

    Introducing a number of innovative and powerful coding tools, the High Efficiency Video Coding (HEVC) standard promises double the compression efficiency of its predecessor H.264 at similar perceptual quality. The increased computational time complexity, however, is an important issue for the video coding research community. This paper attempts to reduce that complexity by efficiently selecting appropriate block-partitioning modes based on motion features and saliency applied to the difference between successive image blocks. As this difference exposes the explicit visible motion and salient information, we develop a cost function combining the motion features and the image-difference salient feature. The combined features are then converted into an area-of-interest (AOI) based binary pattern for the current block. This pattern is compared with a previously defined codebook of binary pattern templates to select a subset of modes. Motion estimation (ME) and motion compensation (MC) are performed only on the selected subset, without exhaustive exploration of all modes available in HEVC. The experimental results reveal a 42% reduction in the encoding time complexity of the HEVC encoder with similar subjective and objective image quality.
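The codebook lookup can be sketched as a nearest-template match. The 2×2 AOI patterns, the templates, and the mode subsets they map to are illustrative placeholders, assuming a Hamming-distance match; the paper's actual codebook is not given here.

```python
# Sketch: match a block's AOI binary pattern (here a 4-bit string for a 2x2
# grid of sub-blocks, 1 = salient motion present) against template patterns,
# and return the PU-mode subset associated with the nearest template.

CODEBOOK = {
    "0000": ["2Nx2N"],          # no salient motion anywhere
    "1111": ["2Nx2N", "NxN"],   # motion over the whole block
    "1100": ["2NxN"],           # motion confined to the top half
    "0011": ["2NxN"],           # motion confined to the bottom half
    "1010": ["Nx2N"],           # motion confined to the left half
    "0101": ["Nx2N"],           # motion confined to the right half
}

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def modes_for_pattern(pattern: str):
    """Pick the codebook template nearest to the block's AOI pattern."""
    best = min(CODEBOOK, key=lambda t: hamming(pattern, t))
    return CODEBOOK[best]
```

ME/MC then run only over the returned subset, which is where the 42% time saving reported above comes from.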

    A novel no-reference subjective quality metric for free viewpoint video using human eye movement

    Free viewpoint video (FVV) allows users to interactively control the viewpoint and generate new views of a dynamic scene from any 3D position, for a better 3D visual experience with depth perception. Multiview video coding exploits both texture and depth video information from various angles to encode a number of views to facilitate FVV. The usual practice for single-view or multiview quality assessment relies on objective metrics such as the peak signal-to-noise ratio (PSNR) or the structural similarity index (SSIM), due to their simplicity and suitability for real-time applications. However, PSNR and SSIM require a reference image for quality evaluation and cannot be successfully employed for FVV, as a newly rendered view has no reference view to compare against. Conversely, the widely used subjective estimator, mean opinion score (MOS), is often biased by the testing environment, viewers' mood, domain knowledge, and many other factors that may actively influence the actual assessment. To address this limitation, in this work we devise a no-reference subjective quality assessment metric by exploiting the pattern of human eye browsing on FVV. Over different quality contents of FVV, the participants' eye-tracker-recorded spatio-temporal gaze data indicate a more concentrated eye-traversing approach for relatively better quality. Thus, we calculate the length, angle, pupil-size, and gaze-duration features from the recorded gaze trajectory. A content- and resolution-invariant operation is carried out prior to synthesizing them using an adaptive weighted function to develop a new quality metric using eye traversal (QMET). Tested results reveal that the proposed QMET performs better than the SSIM and MOS in terms of assessing different aspects of coded video quality for a wide range of FVV contents. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

    Efficient HEVC scheme using motion type categorization

    The High Efficiency Video Coding (HEVC) standard introduces a number of innovative tools that reduce the bit-rate by approximately 50% compared to its predecessor H.264/AVC at the same perceptual video quality, while increasing computational time several-fold. Reducing encoding time while preserving the expected video quality has become a real challenge for video transmission and streaming, especially on low-powered devices. Motion estimation (ME) and motion compensation (MC) using variable-size blocks (i.e., intermodes) require 60–80% of the total computational time. In this paper, we propose a new, efficient intermode selection technique based on phase correlation and incorporate it into the HEVC framework to predict ME and MC modes, performing faster intermode selection based on three dissimilar motion types found in different videos. Instead of exploring all modes exhaustively, we select a subset of modes using the motion type, and the final mode is selected via the Lagrangian cost function. The experimental results show that, compared to HEVC, the average computational time can be reduced by 34% while providing similar rate-distortion (RD) performance.
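Phase correlation, the core primitive above, recovers the displacement between two signals from the peak of the inverse transform of their normalized cross-power spectrum. The 1-D toy below (naive DFT, stdlib only) is a sketch of the principle; HEVC blocks would use a 2-D FFT, and how the peak shape maps to the three motion types is the paper's contribution, not shown here.

```python
import cmath

# Toy 1-D phase correlation: the peak index of the inverse transform of the
# normalized cross-power spectrum gives the circular shift between signals.

def dft(x, inverse=False):
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * i * k / n)
               for k in range(n)) for i in range(n)]
    return [v / n for v in out] if inverse else out

def phase_correlation_shift(a, b):
    """Estimate the circular shift taking signal a to signal b."""
    fa, fb = dft(a), dft(b)
    cross = []
    for x, y in zip(fa, fb):
        p = y * x.conjugate()           # cross-power spectrum
        cross.append(p / abs(p) if abs(p) > 1e-12 else 0j)
    corr = dft(cross, inverse=True)
    return max(range(len(corr)), key=lambda i: corr[i].real)

a = [0, 1, 3, 1, 0, 0, 0, 0]
b = [0, 0, 0, 1, 3, 1, 0, 0]  # a shifted right by 2 samples
```

A sharp single peak suggests simple translational motion (few intermodes needed), while a flat or multi-peaked surface suggests complex motion warranting a larger mode subset.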

    Demonstration of all‐optical modulation in a vertical guided‐wave nonlinear coupler

    The performance characteristics of an AlGaAs dual waveguide vertical coupler with a nonlinear GaAs/AlGaAs multiquantum well coupling medium are demonstrated. The structure was grown by molecular beam epitaxy and fabricated by optical lithography and ion milling. The nonlinear coupling and modulation behavior is identical to that predicted theoretically. The nonlinear index of refraction and critical input power are estimated to be n₂ = 1.67×10⁻⁵ cm²/W and Pc = 170 W/cm², respectively. This device also allows reliable measurement of the nonlinear refractive index for varying quantum well and optical excitation parameters. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/69681/2/APPLAB-52-14-1125-1.pd
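As a rough consistency check on the quoted values, the Kerr-type intensity-dependent index underlying the nonlinear coupling has the standard form (the relation itself is standard; treating n₂Pc as the switching index change is an illustrative assumption):

\[
n(I) = n_0 + n_2 I, \qquad
\Delta n_c \approx n_2 P_c = \left(1.67\times10^{-5}\ \mathrm{cm^2/W}\right)\left(170\ \mathrm{W/cm^2}\right) \approx 2.8\times10^{-3}
\]

An index change of order 10⁻³ at the critical input power is sufficient to detune the two guides and switch the coupler.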